Artiste is a European project developing a cross-collection search system for art galleries and museums. It combines image-content retrieval with text-based retrieval and uses RDF mappings to integrate diverse databases. The test sites, the Louvre, the Victoria and Albert Museum, the Uffizi Gallery and the National Gallery London, each provide their own database schema for existing metadata, avoiding the need for migration to a common schema. The system accepts a query based on one museum's fields and converts it, through an RDF mapping, into a form suitable for querying the other collections. Because some of the image-processing algorithms make certain computations slow, the system is session-based, allowing the user to return to the results later. The system has been built within a J2EE/EJB framework, using the JBoss Enterprise Application Server.
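To make the field-mapping idea concrete, here is a minimal Python sketch of translating a query expressed in one collection's fields into another collection's fields via a shared set of concepts. The field names, the mapping table, and the dict representation are invented for illustration; ARTISTE itself expresses such mappings in RDF.

# Illustrative only: field names and the mapping table are hypothetical;
# ARTISTE expresses such mappings in RDF rather than Python dicts.

# Map each local field to a shared (canonical) concept, per collection.
FIELD_TO_CONCEPT = {
    "louvre": {"titre": "title", "auteur": "creator", "date_creation": "date"},
    "uffizi": {"titolo": "title", "artista": "creator", "anno": "date"},
}

def translate_query(query, source, target):
    """Re-express {local_field: value} from one collection's schema in another's."""
    to_concept = FIELD_TO_CONCEPT[source]
    # Invert the target mapping: concept -> target field.
    from_concept = {c: f for f, c in FIELD_TO_CONCEPT[target].items()}
    out = {}
    for field, value in query.items():
        concept = to_concept.get(field)
        if concept and concept in from_concept:
            out[from_concept[concept]] = value
    return out

print(translate_query({"auteur": "Delacroix"}, source="louvre", target="uffizi"))
# -> {'artista': 'Delacroix'}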
Secondary Title
WWW2002: The Eleventh International World Wide Web Conference
Publisher
International World Wide Web Conference Committee
ISBN
1-880672-20-0
Critical Arguments
CA "A key aim is to make a unified retrieval system which is targeted to usersÔÇÖ real requirements and which is usable with integrated cross-collection searching. Museums and Galleries often have several digital collections ranging from public access images to specialised scientific images used for conservation purposes. Access from one gallery to another was not common in terms of textual data and not done at all in terms of image-based queries. However the value of cross-collection access is recognised as important for example in comparing treatments and conditions of paintings. While ARTISTE is primarily designed for inter-museum searching it could equally be applied to museum intranets. Within a MuseumÔÇÖs intranet there may be systems which are not interlinked due to local management issues."
Conclusions
RQ "The query language for this type of system is not yet standardised but we hope that an emerging standard will provide the session-based connectivity this application seems to require due to the possibility of long query times." ... "In the near future, the project will be introducing controlled vocabulary support for some of the metadata fields. This will not only make retrieval more robust but will also facilitate query expansion. The LouvreÔÇÖs multilingual thesaurus will be used in order to ensure greater interoperability. The system is easily extensible to other multimedia types such as audio and video (eg by adding additional query items such as "dialog" and "video sequence" with appropriate analysers). A follow-up project is scheduled to explore this further. There is some scope for relating our RDF query format to the emerging query standards such as XQuery and we also plan to feed our experience into standards such as the ZNG initiative.
SOW
DC "The Artiste project is a European Commission funded collaboration, investigating the use of integrated content and metadata-based image retrieval across disparate databases in several major art galleries across Europe. Collaborating galleries include the Louvre in Paris, the Victoria and Albert Museum in London, the Uffizi Gallery in Florence and the National Gallery in London." ... "Artiste is funded by the European CommunityÔÇÖs Framework 5 programme. The partners are: NCR, The University of Southampton, IT Innovation, Giunti Multimedia, The Victoria and Albert Museum, The National Gallery, The research laboratory of the museums of France (C2RMF) and the Uffizi Gallery. We would particularly like to thank our collaborators Christian Lahanier, James Stevenson, Marco Cappellini, John Cupitt, Raphaela Rimabosci, Gert Presutti, Warren Stirling, Fabrizio Giorgini and Roberto Vacaro."
Type
Journal
Title
Archival Issues in Network Electronic Publications
"Archives are retained information systems that are developed according to professional principles to meet anticipated demands of user clienteles in the context of the changing conditions created by legal environments and electronic or digital technologies. This article addresses issues in electronic publishing, including authentication, mutability, reformatting, preservation, and standards from an archival perspective. To ensure continuing access to electronically published texts, a special emphasis is placed on policy planning in the development and implementation of electronic systems" (p.701).
Critical Arguments
<P1> Archives are established, administered, and evaluated by institutions, organizations, and individuals to ensure the retention, preservation, and utilization of archival holdings (p.701) <P2> The three principal categories of archival materials are official files of institutions and organizations, publications issued by such bodies, and personal papers of individuals. . . . Electronic information technologies have had profound effects on aspects of all these categories (p.702) <P3> The primary archival concern with regard to electronic publishing is that the published material should be transferred to archival custody. When the transfer occurs, the archivist must address the issues of authentication, appraisal, arrangement, description, and preservation or physical protection (p.702) <P4> The most effective way to satisfy archival requirements for handling electronic information is the establishment of procedures and standards to ensure that valuable material is promptly transferred to archival custody in a format which will permit access on equipment that will be readily available in the future (p.702) <P5> Long-term costs and access requirements are the crucial factors in determining how much information should be retained in electronic formats (p.703) <P6> Authentication involves a determination of the validity or integrity of information. Integrity requires the unbroken custody of a body of information by a responsible authority or individual <warrant> (p.703) <P7> From an archival perspective, the value of information is dependent on its content and the custodial responsibility of the agency that maintains it -- e.g., the source determines authenticity. The authentication of archival information requires that it be verified as to source, date, and content <warrant> (p.704) <P8> Information that is mutable, modifiable, or changeable loses its validity if the persons adding, altering, or deleting information cannot be identified and the time, place and nature of the changes is unknown (p.704) <P9> [P]reservation is more a matter of access to information than it is a question of survival of any physical information storage media (p.704) <P10> [T]o approach the preservation of electronic texts by focusing on physical threats will miss the far more pressing matter of ensuring continued accessibility to the information on such storage media (p.706) <P11> If the information is to remain accessible as long as paper, preservation must be a front-end, rather than an ex post facto, action (p.708) <P12> [T]he preservation of electronic texts is first and foremost a matter of editorial and administrative policy rather than of techniques and materials (p.708) <P13> Ultimately, the preservation of electronic publications cannot be solely an archival issue but an administrative one that can be addressed only if the creators and publishers take an active role in providing resources necessary to ensure that ongoing accessibility is part of initial system and product design (p.709) <P14> An encouraging development is that SGML has been considered to be a critical element for electronic publishing because of its transportability and because it supports multiple representations of a single text . . . (p.711) <P15> Underlying all questions of access is the fundamental consideration of cost (p.711)
Type
Journal
Title
Grasping the Nettle: The Evolution of Australian Archives Electronic Records Policy
CA An overview of the development of electronic records policy at the Australian Archives.
Phrases
<P1> The notion of records being independent of format and of "virtual" records opens up a completely new focus on what it is that archival institutions are attempting to preserve. (p. 136) <P2> The import of Bearman's contention that not all information systems are recordkeeping systems challenges archivists to move attention away from managing archival records after the fact toward involvement in the creation phase of records, i.e., in the systems design and implementation process. (p. 139) <P3> The experience of the Australian Archives is but one slice of a very large pie, but I think it is a good indication of the challenges other institutions are facing internationally. (p. 144)
Conclusions
RQ How has the Australian Archives managed the transition from paper to electronic records? What issues were raised and how were they dealt with?
Type
Electronic Journal
Title
Keeping Memory Alive: Practices for Preserving Digital Content at the National Digital Library Program of the Library of Congress
CA An overview of the major issues and initiatives in digital preservation at the Library of Congress. "In the medium term, the National Digital Library Program is focusing on two operational approaches. First, steps are taken during conversion that are likely to make migration or emulation less costly when they are needed. Second, the bit streams generated by the conversion process are kept alive through replication and routine refreshing supported by integrity checks. The practices described here provide examples of how those steps are implemented to keep the content of American Memory alive."
Phrases
<P1> The practices described here should not be seen as policies of the Library of Congress; nor are they suggested as best practices in any absolute sense. NDLP regards them as appropriate practices based on real experience, the nature and content of the originals, the primary purposes of the digitization, the state of technology, the availability of resources, the scale of the American Memory digital collection, and the goals of the program. They cover not just the storage of content and associated metadata, but also aspects of initial capture and quality review that support the long-term retention of content digitized from analog sources. <P2> The Library recognizes that digital information resources, whether born digital or converted from analog forms, should be acquired, used, and served alongside traditional resources in the same format or subject area. Such responsibility will include ensuring that effective access is maintained to the digital content through American Memory and via the Library's main catalog and, in coordination with the units responsible for the technical infrastructure, planning migration to new technology when needed. <P3> Refreshing can be carried out in a largely automated fashion on an ongoing basis. Migration, however, will require substantial resources, in a combination of processing time, out-sourced contracts, and staff time. Choice of appropriate formats for digital masters will defer the need for large-scale migration. Integrity checks and appropriate capture of metadata during the initial capture and production process will reduce the resource requirements for future migration steps. <warrant> We can be certain that migration of content to new data formats will be necessary at some point. The future will see industrywide adoption of new data formats with functional advantages over current standards. However, it will be difficult to predict exactly which metadata will be useful to support migration, when migration of master formats will be needed, and the nature and extent of resource needs. Human experts will need to decide when to undertake migration and develop tools for each migration step. <P4> Effective preservation of resources in digital form requires (a) attention early in the life-cycle, at the moment of creation, publication, or acquisition and (b) ongoing management (with attendant costs) to ensure continuing usability. <P5> The National Digital Library Program has identified several categories of metadata needed to support access and management for digital content. Descriptive metadata supports discovery through search and browse functions. Structural metadata supports presentation of complex objects by representing relationships between components, such as sequences of images. In addition, administrative metadata is needed to support management tasks, such as access control, archiving, and migration. Individual metadata elements may support more than one function, but the categorization of elements by function has proved useful. <P6> It has been recognized that metadata representations appropriate for manipulation and long-term retention may not always be appropriate for real-time delivery. <P7> It has also been realized that some basic descriptive metadata (at the very least a title or brief description) should be associated with the structural and administrative metadata. 
<P8> During 1999, an internal working group reviewed past experience and prototype exercises and compiled a core set of metadata elements that will serve the different functions identified. This set will be tested and refined as part of pilot activities during 2000. <P9> Master formats are well documented and widely deployed, preferably formal standards and preferably non-proprietary. Such choices should minimize the need for future migration or ensure that appropriate and affordable tools for migration will be developed by the industry. <warrant>
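The "integrity checks" that accompany replication and routine refreshing are, in essence, fixity checks. The following is a minimal sketch of that idea, assuming a simple JSON registry of SHA-256 digests; it is not NDLP's actual tooling, and the file paths and registry format are assumptions.

# Sketch of a fixity check: record a digest at ingest, re-verify after each
# refresh or replication cycle. Registry layout is hypothetical.
import hashlib, json, pathlib

def sha256(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

def record_fixity(path, registry="fixity.json"):
    reg_path = pathlib.Path(registry)
    reg = json.loads(reg_path.read_text()) if reg_path.exists() else {}
    reg[path] = sha256(path)
    reg_path.write_text(json.dumps(reg, indent=2))

def verify_fixity(path, registry="fixity.json"):
    reg = json.loads(pathlib.Path(registry).read_text())
    return reg.get(path) == sha256(path)

A routine job could call verify_fixity after each refresh or replication pass and flag any master file whose digest no longer matches the stored value.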
Conclusions
RQ "Developing long-term strategies for preserving digital resources presents challenges associated with the uncertainties of technological change. There is currently little experience on which to base predictions of how often migration to new formats will be necessary or desirable or whether emulation will prove cost-effective for certain categories of resources. ... Technological advances, while sure to present new challenges, will also provide new solutions for preserving digital content."
Type
Electronic Journal
Title
A Spectrum of Interoperability: The Site for Science Prototype for the NSDL
"Currently, NSF is funding 64 projects, each making its own contribution to the library, with a total annual budget of about $24 million. Many projects are building collections; others are developing services; a few are carrying out targeted research.The NSDL is a broad program to build a digital library for education in science, mathematics, engineering and technology. It is funded by the National Science Foundation (NSF) Division of Undergraduate Education. . . . The Core Integration task is to ensure that the NSDL is a single coherent library, not simply a set of unrelated activities. In summer 2000, the NSF funded six Core Integration demonstration projects, each lasting a year. One of these grants was to Cornell University and our demonstration is known as Site for Science. It is at http://www.siteforscience.org/ [Site for Science]. In late 2001, the NSF consolidated the Core Integration funding into a single grant for the production release of the NSDL. This grant was made to a collaboration of the University Corporation for Atmospheric Research (UCAR), Columbia University and Cornell University. The technical approach being followed is based heavily on our experience with Site for Science. Therefore this article is both a description of the strategy for interoperability that was developed for Site for Science and an introduction to the architecture being used by the NSDL production team."
ISBN
1082-9873
Critical Arguments
CA "[T]his article is both a description of the strategy for interoperability that was developed for the [Cornell University's NSF-funded] Site for Science and an introduction to the architecture being used by the NSDL production team."
Phrases
<P1> The grand vision is that the NSDL become a comprehensive library of every digital resource that could conceivably be of value to any aspect of education in any branch of science and engineering, both defined very broadly. <P2> Interoperability among heterogeneous collections is a central theme of the Core Integration. The potential collections have a wide variety of data types, metadata standards, protocols, authentication schemes, and business models. <P3> The goal of interoperability is to build coherent services for users, from components that are technically different and managed by different organizations. This requires agreements to cooperate at three levels: technical, content and organizational. <P4> Much of the research of the authors of this paper aims at . . . looking for approaches to interoperability that have low cost of adoption, yet provide substantial functionality. One of these approaches is the metadata harvesting protocol of the Open Archives Initiative (OAI) . . . <P5> For Site for Science, we identified three levels of digital library interoperability: Federation; Harvesting; Gathering. In this list, the top level provides the strongest form of interoperability, but places the greatest burden on participants. The bottom level requires essentially no effort by the participants, but provides a poorer level of interoperability. The Site for Science demonstration concentrated on the harvesting and gathering, because other projects were exploring federation. <P6> In an ideal world all the collections and services that the NSDL wishes to encompass would support an agreed set of standard metadata. The real world is less simple. . . . However, the NSDL does have influence. We can attempt to persuade collections to move along the interoperability curve. <warrant> <P7> The Site for Science metadata strategy is based on two principles. The first is that metadata is too expensive for the Core Integration team to create much of it. Hence, the NSDL has to rely on existing metadata or metadata that can be generated automatically. The second is to make use of as much of the metadata available from collections as possible, knowing that it varies greatly from none to extensive. Based on these principles, Site for Science, and subsequently the entire NSDL, developed the following metadata strategy: Support eight standard formats; Collect all existing metadata in these formats; Provide crosswalks to Dublin Core; Assemble all metadata in a central metadata repository; Expose all metadata records in the repository for service providers to harvest; Concentrate limited human effort on collection-level metadata; Use automatic generation to augment item-level metadata. <P8> The strategy developed by Site for Science and now adopted by the NSDL is to accumulate metadata in the native formats provided by the collections . . . If a collection supports the protocols of the Open Archives Initiative, it must be able to supply unqualified Dublin Core (which is required by the OAI) as well as the native metadata format. <P9> From a computing viewpoint, the metadata repository is the key component of the Site for Science system. The repository can be thought of as a modern variant of the traditional library union catalog, a catalog that holds comprehensive catalog records from a group of libraries. . . . Metadata from all the collections is stored in the repository and made available to providers of NSDL service.
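As a rough illustration of the strategy described in P7 and P8, the sketch below crosswalks records from two hypothetical native formats to unqualified Dublin Core and pools both the native and crosswalked versions in a central repository structure. The format names, field names, and crosswalk tables are invented; the real NSDL supports eight standard formats and harvests metadata via OAI-PMH.

# Sketch: crosswalk native metadata to unqualified Dublin Core and pool the
# results in a central repository. The two native formats are hypothetical.
CROSSWALKS = {
    "native_a": {"Name": "title", "Author": "creator", "Topic": "subject"},
    "native_b": {"heading": "title", "maker": "creator", "keywords": "subject"},
}

repository = []  # stands in for the central metadata repository

def crosswalk(record, fmt):
    walk = CROSSWALKS[fmt]
    return {walk[k]: v for k, v in record.items() if k in walk}

def ingest(record, fmt):
    repository.append({"native_format": fmt,
                       "native": record,
                       "dc": crosswalk(record, fmt)})

ingest({"Name": "Plate Tectonics", "Author": "USGS"}, "native_a")
ingest({"heading": "Cell Division", "maker": "NSTA"}, "native_b")
print([r["dc"] for r in repository])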
Conclusions
RQ 1 "Can a small team of librarians manage the collection development and metadata strategies for a very large library?" RQ 2 "Can the NSDL actually build services that are significantly more useful than the general web search services?"
Type
Electronic Journal
Title
Primary Sources, Research, and the Internet: The Digital Scriptorium at Duke
First Monday, Peer Reviewed Journal on the Internet
Publication Year
1997
Volume
2
Issue
9
Critical Arguments
CA "As the digital revolution moves us ever closer to the idea of the 'virtual library,' repositories of primary sources and other archival materials have both a special opportunity and responsibility. Since the materials in their custody are, by definition, often unique, these institutions will need to work very carefully with scholars and other researchers to determine what is the most effective way of making this material accessible in a digital environment."
Phrases
<P1> The matter of Internet access to research materials and collections is not one of simply doing what we have always done -- except digitally. It represents instead an opportunity to rethink the fundamental triangular relationship between libraries and archives, their collections, and their users. <P2> Digital information as it exists on the Internet today requires more navigational, contextual, and descriptive data than is currently provided in traditional card catalogs or their more modern electronic equivalent. One simply cannot throw up vast amounts of textual or image-based data onto the World Wide Web and expect existing search engines to make much sense of it or users to be able to digest the results. ... Archivists and manuscript curators have for many years now been providing just that sort of contextual detail in the guides, finding aids, and indexes that they have traditionally prepared for their holdings. <P3> Those involved in the Berkeley project understood that HTML was essentially a presentational encoding scheme and lacked the formal structural and content-based encoding that SGML would offer. <P4> Encoded Archival Description is quickly moving towards becoming an internationally embraced standard for the encoding of archival metadata in a wide variety of archival repositories and special collections libraries. And the Digital Scriptorium at Duke has become one of the early implementors of this standard. <warrant>
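A small sketch of the kind of structural encoding a finding aid gains from EAD, built with Python's standard library. The element names are intended to follow EAD 2002 (ead, archdesc, did, unittitle, unitdate, dsc, c01), but the fragment is illustrative only and should be checked against the EAD DTD or schema before real use.

# Sketch: a tiny nested finding aid showing structural (not presentational) encoding.
import xml.etree.ElementTree as ET

ead = ET.Element("ead")
archdesc = ET.SubElement(ead, "archdesc", level="collection")
did = ET.SubElement(archdesc, "did")
ET.SubElement(did, "unittitle").text = "Example Family Papers"
ET.SubElement(did, "unitdate").text = "1850-1920"
dsc = ET.SubElement(archdesc, "dsc")
series = ET.SubElement(dsc, "c01", level="series")
ET.SubElement(ET.SubElement(series, "did"), "unittitle").text = "Correspondence"

print(ET.tostring(ead, encoding="unicode"))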
Conclusions
RQ "Duke is currently involved in a project that is funded through NEH and also involves the libraries of Stanford, the University of Virginia, and the University of California-Berkeley. This project (dubbed the "American Heritage Virtual Digital Archives Project") will create a virtual archive of encoded finding aids from all four institutions. This archive will permit seamless searching of these finding aids -- at a highly granular level of detail -- through a single search engine on one site and will, it is hoped, provide a model for a more comprehensive national system in the near future."
CA Describes efforts undertaken at the National Library of New Zealand to ensure preservation of electronic resources.
Phrases
<P1> The National Library Act 1965 provides the legislative framework for the National Library of New Zealand '... to collect, preserve, and make available recorded knowledge, particularly that relating to New Zealand, to supplement and further the work of other libraries in New Zealand, and to enrich the cultural and economic life of New Zealand and its cultural interchanges with other nations.' Legislation currently before Parliament, if enacted, will give the National Library the mandate to collect digital resources for preservation purposes. <warrant> (p. 18) <P2> So, the Library has an organisational commitment and may soon have the legislative environment to support the collection, management and preservation of digital objects. ... The next issue is what needs to be done to ensure that a viable preservation programme can actually be put in place. (p. 18) <P3> As the Library had already begun systematising its approach to resource discovery metadata, development of a preservation metadata schema for use within the Library was a logical next step. (p. 18) <P4> Work on the schema was initially informed by other international endeavours relating to preservation metadata, particularly that undertaken by the National Library of Australia. Initiatives through the CEDARS programme, OCLC/RLG activities and the emerging consensus regarding the role of the OAIS Reference Model ... were also taken into account. <warrant> (p. 18-19) <P5> The Library's Preservation Metadata schema is designed to strike a balance between the principles of preservation metadata, as expressed through the OAIS Information Model, and the practicalities of implementing a working set of preservation metadata. The same incentive informs a recent OCLC/RLG report on the OAIS model. (p. 19) <P6> [I]t is unlikely that anything resembling a comprehensive schema will become available in the short term. However, the need is pressing. (p. 19) <P7> The development of the preservation metadata schema is one component of an ongoing programme of activities needed to ensure the incorporation of digital material into the Library's core business processes with a view to the long-term accessibility of those resources. <warrant> (p. 19) <P8> The aim of the above activities is for the Library to be acknowledged as a 'trusted repository' for digital material which ensures the viability and authenticity of digital objects over time. (p. 20) <P9> The Library will also have to develop relationships with other organisations that might wish to achieve 'trusted repository' status in a country with a small population base and few agencies of appropriate size, funding and willingness to take on the role.
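As a loose illustration of balancing the OAIS Information Model against a practical working set, a minimal preservation-metadata record organised around the OAIS Preservation Description Information categories (reference, provenance, context, fixity) might look like the sketch below. The field names are invented and are not the Library's actual schema.

# Illustrative only: top-level keys loosely follow the OAIS PDI categories;
# field names are hypothetical, not the NLNZ preservation metadata schema.
record = {
    "reference":  {"identifier": "example:obj:0001"},
    "provenance": {"created_by": "Example Agency", "ingest_date": "2003-05-01"},
    "context":    {"part_of": "Example web-archive collection"},
    "fixity":     {"algorithm": "SHA-256", "digest": "0e5b..."},  # hypothetical digest
    "representation": {"format": "application/pdf", "format_version": "1.4"},
}

REQUIRED = {"reference", "provenance", "context", "fixity"}

def is_complete(rec):
    # Check that every required PDI category is present before accepting the object.
    return REQUIRED <= rec.keys()

print(is_complete(record))  # True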
Conclusions
RQ There are still a number of important issues to be resolved before the Library's preservation programme can be deemed a success, including the need for: higher level of awareness of the need for digital preservation within the community of 'memory institutions' and more widely; metrics regarding the size and scope of the problem; finance to research and implement digital preservation; new skill sets for implementing digital preservation, e.g. running the multiplicity of hardware/software involved, digital conservation/archaeology; agreed international approaches to digital preservation; practical models to match the high level conceptual work already undertaken internationally; co-operation/collaboration between the wider range of agents potentially able to assist in developing digital preservation solutions, e.g. the computing industry; and, last but not least, clarity around intellectual property, copyright, privacy and moral rights.
SOW
DC OAIS emerged out of an initiative spearheaded by NASA's Consultative Committee for Space Data Systems. It has been shaped and promoted by the RLG and OCLC. Several international projects have played key roles in shaping the OAIS model and adapting it for use in libraries, archives and research repositories. OAIS-modeled repositories include the CEDARS Project, Harvard's Digital Repository, Koninklijke Bibliotheek (KB), the Library of Congress' Archival Information Package for audiovisual materials, MIT's D-Space, OCLC's Digital Archive and TERM: the Texas Email Repository Model.
Type
Electronic Journal
Title
Collection-Based Persistent Digital Archives - Part 1
The preservation of digital information for long periods of time is becoming feasible through the integration of archival storage technology from supercomputer centers, data grid technology from the computer science community, information models from the digital library community, and preservation models from the archivist's community. The supercomputer centers provide the technology needed to store the immense amounts of digital data that are being created, while the digital library community provides the mechanisms to define the context needed to interpret the data. The coordination of these technologies with preservation and management policies defines the infrastructure for a collection-based persistent archive. This paper defines an approach for maintaining digital data for hundreds of years through development of an environment that supports migration of collections onto new software systems.
ISBN
1082-9873
Critical Arguments
CA "Supercomputer centers, digital libraries, and archival storage communities have common persistent archival storage requirements. Each of these communities is building software infrastructure to organize and store large collections of data. An emerging common requirement is the ability to maintain data collections for long periods of time. The challenge is to maintain the ability to discover, access, and display digital objects that are stored within an archive, while the technology used to manage the archive evolves. We have implemented an approach based upon the storage of the digital objects that comprise the collection, augmented with the meta-data attributes needed to dynamically recreate the data collection. This approach builds upon the technology needed to support extensible database schema, which in turn enables the creation of data handling systems that interconnect legacy storage systems."
Phrases
<P1> The ultimate goal is to preserve not only the bits associated with the original data, but also the context that permits the data to be interpreted. <warrant> <P2> We rely on the use of collections to define the context to associate with digital data. The context is defined through the creation of semi-structured representations for both the digital objects and the associated data collection. <P3>A collection-based persistent archive is therefore one in which the organization of the collection is archived simultaneously with the digital objects that comprise the collection. <P4> The goal is to preserve digital information for at least 400 years. This paper examines the technical issues that must be addressed and presents a prototype implementation. <P5>Digital object representation. Every digital object has attributes that define its structure, physical context, and provenance, and annotations that describe features of interest within the object. Since the set of attributes (such as annotations) will vary across all objects within a collection, a semi-structured representation is needed. Not all digital objects will have the same set of associated attributes. <P6> If possible, a common information model should be used to reference the attributes associated with the digital objects, the collection organization, and the presentation interface. An emerging standard for a uniform data exchange model is the eXtended Markup Language (XML). <P7> A particular example of an information model is the XML Document Type Definition (DTD) which provides a description for the allowed nesting structure of XML elements. Richer information models are emerging such as XSchema (which provides data types, inheritance, and more powerful linking mechanisms) and XMI (which provides models for multiple levels of data abstraction). <P8> Although XML DTDs were originally applied to documents only, they are now being applied to arbitrary digital objects, including the collections themselves. More generally, OSDs can be used to define the structure of digital objects, specify inheritance properties of digital objects, and define the collection organization and user interface structure. <P9> A persistent collection therefore needs the following components of an OSD to completely define the collection context: Data dictionary for collection semantics; Digital object structure; Collection structure; and User interface structure. <P10> The re-creation or instantiation of the data collection is done with a software program that uses the schema descriptions that define the digital object and collection structure to generate the collection. The goal is to build a generic program that works with any schema description. <P11> The information for which driver to use for access to a particular data set is maintained in the associated Meta-data Catalog (MCAT). The MCAT system is a database containing information about each data set that is stored in the data storage systems. <P12> The data handling infrastructure developed at SDSC has two components: the SDSC Storage Resource Broker (SRB) that provides federation and access to distributed and diverse storage resources in a heterogeneous computing environment, and the Meta-data Catalog (MCAT) that holds systemic and application or domain-dependent meta-data about the resources and data sets (and users) that are being brokered by the SRB. <P13> A client does not need to remember the physical mapping of a data set. 
It is stored as meta-data associated with the data set in the MCAT catalog. <P14> A characterization of a relational database requires a description of both the logical organization of attributes (the schema), and a description of the physical organization of attributes into tables. For the persistent archive prototype we used XML DTDs to describe the logical organization. <P15> A combination of the schema and physical organization can be used to define how queries can be decomposed across the multiple tables that are used to hold the meta-data attributes. <P16> By using an XML-based database, it is possible to avoid the need to map between semi-structured and relational organizations of the database attributes. This minimizes the amount of information needed to characterize a collection, and makes the re-creation of the database easier. <warrant> <P17> Digital object attributes are separated into two classes of information within the MCAT: System-level meta-data that provides operational information. These include information about resources (e.g., archival systems, database systems, etc., and their capabilities, protocols, etc.) and data objects (e.g., their formats or types, replication information, location, collection information, etc.); Application-dependent meta-data that provides information specific to particular data sets and their collections (e.g., Dublin Core values for text objects). <P18> Internally, MCAT keeps schema-level meta-data about all of the attributes that are defined. The schema-level attributes are used to define the context for a collection and enable the instantiation of the collection on new technology. <P19> The logical structure should not be confused with database schema and are more general than that. For example, we have implemented the Dublin Core database schema to organize attributes about digitized text. The attributes defined in the logical structure that is associated with the Dublin Core schema contains information about the subject, constraints, and presentation formats that are needed to display the schema along with information about its use and ownership. <P20> The MCAT system supports the publication of schemata associated with data collections, schema extension through the addition or deletion of new attributes, and the dynamic generation of the SQL that corresponds to joins across combinations of attributes. <P21> By adding routines to access the schema-level meta-data from an archive, it is possible to build a collection-based persistent archive. As technology evolves and the software infrastructure is replaced, the MCAT system can support the migration of the collection to the new technology.
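A toy sketch of the instantiation step described above: the collection's logical organisation is kept as an infrastructure-independent XML description, and a generic routine regenerates the relational DDL from it when the collection is migrated to new technology. The XML layout and attribute names are invented; they merely stand in for the OSD/DTD descriptions and the MCAT schema-level metadata used at SDSC.

# Sketch: regenerate a relational table definition from an XML description
# of the collection's attributes. Layout and names are hypothetical.
import xml.etree.ElementTree as ET

collection_xml = """
<collection name="email_archive">
  <attribute name="subject"   type="TEXT"/>
  <attribute name="sender"    type="TEXT"/>
  <attribute name="sent_date" type="TEXT"/>
</collection>
"""

def ddl_from_description(xml_text):
    root = ET.fromstring(xml_text)
    cols = ", ".join(f'{a.get("name")} {a.get("type")}'
                     for a in root.findall("attribute"))
    return f'CREATE TABLE {root.get("name")} ({cols});'

print(ddl_from_description(collection_xml))
# CREATE TABLE email_archive (subject TEXT, sender TEXT, sent_date TEXT);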
Conclusions
RQ Collection-Based Persistent Digital Archives - Part 2
SOW
DC "The technology proposed by SDSC for implementing persistent archives builds upon interactions with many of these groups. Explicit interactions include collaborations with Federal planning groups, the Computational Grid, the digital library community, and individual federal agencies." ... "The data management technology has been developed through multiple federally sponsored projects, including the DARPA project F19628-95-C-0194 "Massive Data Analysis Systems," the DARPA/USPTO project F19628-96-C-0020 "Distributed Object Computation Testbed," the Data Intensive Computing thrust area of the NSF project ASC 96-19020 "National Partnership for Advanced Computational Infrastructure," the NASA Information Power Grid project, and the DOE ASCI/ASAP project "Data Visualization Corridor." Additional projects related to the NSF Digital Library Initiative Phase II and the California Digital Library at the University of California will also support the development of information management technology. This work was supported by a NARA extension to the DARPA/USPTO Distributed Object Computation Testbed, project F19628-96-C-0020."
Type
Electronic Journal
Title
Collection-Based Persistent Digital Archives - Part 2
"Collection-Based Persistent Digital Archives: Part 2" describes the creation of a one million message persistent E-mail collection. It discusses the four major components of a persistent archive system: support for ingestion, archival storage, information discovery, and presentation of the collection. The technology to support each of these processes is still rapidly evolving, and opportunities for further research are identified.
ISBN
1082-9873
Critical Arguments
CA "The multiple migration steps can be broadly classified into a definition phase and a loading phase. The definition phase is infrastructure independent, whereas the loading phase is geared towards materializing the processes needed for migrating the objects onto new technology. We illustrate these steps by providing a detailed description of the actual process used to ingest and load a million-record E-mail collection at the San Diego Supercomputer Center (SDSC). Note that the SDSC processes were written to use the available object-relational databases for organizing the meta-data. In the future, it may be possible to go directly to XML-based databases."
Phrases
<P1> The processes used to ingest a collection, transform it into an infrastructure independent form, and store the collection in an archive comprise the persistent storage steps of a persistent archive. The processes used to recreate the collection on new technology, optimize the database, and recreate the user interface comprise the retrieval steps of a persistent archive. <P2> In order to build a persistent collection, we consider a solution that "abstracts" all aspects of the data and its preservation. In this approach, data object and processes are codified by raising them above the machine/software dependent forms to an abstract format that can be used to recreate the object and the processes in any new desirable forms. <P3> The SDSC infrastructure uses object-relational databases to organize information. This makes data ingestion more complex by requiring the mapping of the XML DTD semi-structured representation onto a relational schema. <P4> The steps used to store the persistent archive were: (1) Define Digital Object: define meta-data, define object structure (OBJ-DTD) --- (A), define object DTD to object DDL mapping --- (B) (2) Define Collection: define meta-data, define collection structure (COLL-DTD) --- (C), define collection DTD structure to collection DDL mapping --- (D) (3) Define Containers: define packing format for encapsulating data and meta-data (examples are the AIP standard, Hierarchical Data Format, Document Type Definition) <P5> In the ingestion phase, the relational and semi-structured organization of the meta-data is defined. No database is actually created, only the mapping between the relational organization and the object DTD. <P6> Note that the collection relational organization does not have to encompass all of the attributes that are associated with a digital object. Separate information models are used to describe the objects and the collections. It is possible to take the same set of digital objects and form a new collection with a new relational organization. <P7> Multiple communities across academia, the federal government, and standards groups are exploring strategies for managing very large archives. The persistent archive community needs to maintain interactions with these communities to track development of new strategies for data management and storage. <warrant>
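A compact sketch of the loading phase for an e-mail collection, under assumed element and column names: semi-structured records are parsed and inserted into a relational table whose structure mirrors the collection description produced in the definition phase. This is illustrative Python/SQLite, not the SDSC production process for the million-record collection.

# Sketch of the loading phase: parse toy e-mail records in XML and load them
# into the table defined from the collection description. Names are hypothetical.
import sqlite3
import xml.etree.ElementTree as ET

messages = """
<mailbox>
  <message><subject>Status</subject><sender>a@example.org</sender><sent_date>1999-04-01</sent_date></message>
  <message><subject>Re: Status</subject><sender>b@example.org</sender><sent_date>1999-04-02</sent_date></message>
</mailbox>
"""

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE email_archive (subject TEXT, sender TEXT, sent_date TEXT)")
for m in ET.fromstring(messages).findall("message"):
    db.execute("INSERT INTO email_archive VALUES (?, ?, ?)",
               (m.findtext("subject"), m.findtext("sender"), m.findtext("sent_date")))
print(db.execute("SELECT sender, subject FROM email_archive").fetchall())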
Conclusions
RQ "The four major components of the persistent archive system are support for ingestion, archival storage, information discovery, and presentation of the collection. The first two components focus on the ingestion of data into collections. The last two focus on access to the resulting collections. The technology to support each of these processes is still rapidly evolving. Hence consensus on standards has not been reached for many of the infrastructure components. At the same time, many of the components are active areas of research. To reach consensus on a feasible collection-based persistent archive, continued research and development is needed. Examples of the many related issues are listed below:
CA Discussion of the challenges faced by librarians and archivists who must determine which, and how much, of the vast quantity of digitally recorded sound material to preserve. Identifies various types of digital sound formats and the varying standards to which they are created. Specific challenges discussed include copyright issues; technologies and platforms; digitization and preservation; and metadata and other standards.
Conclusions
RQ "Whether between record companies and archives or with others, some type of collaborative approach to audio preservation will be necessary if significant numbers of audio recordings at risk are to be preserved for posterity. ... One particular risk of preservation programs now is redundancy. ... Inadequate cataloging is a serious impediment to preservation efforts. ... It would be useful to archives, and possibly to intellectual property holders as well, if archives could use existing industry data for the bibliographic control of published recordings and detailed listings of the music recorded on each disc or tape. ... Greater collaboration between libraries and the sound recording industry could result in more comprehensive catalogs that document recording sessions with greater specificity. With access to detailed and authoritative information about the universe of published sound recordings, libraries could devote more resources to surveying their unpublished holdings and collaborate on the construction of a preservation registry to help reduce preservation redundancy. ... Many archivists believe that adequate funding for preservation will not be forthcoming unless and until the recordings preserved can be heard more easily by the public. ... If audio recordings that do not have mass appeal are to be preserved, that responsibility will probably fall to libraries and archives. Within a partnership between archives and intellectual property owners, archives might assume responsibility for preserving less commercial music in return for the ability to share files of preserved historical recordings."
Type
Web Page
Title
Update on the National Digital Infrastructure Initiative
CA Describes progress on a five-year national strategy for preserving digital content.
Conclusions
RQ "These sessions helped us set priorities. Participants agreed about the need for a national preservation strategy. People from industry were receptive to the idea that the public good, as well as their own interests, would be served by coming together to think about long-term preservation. <warrant> They also agreed on the need for some form of distributor-decentralized solution. Like others, they realize that no library can tackle the digital preservation challenge alone. Many parties will need to come together. Participants agreed about the need for digital preservation research, a clearer agenda, a better focus, and a greater appreciation that technology is not necessarily the prime focus. The big challenge might be organizational architecture, i.e., roles and responsibilities. Who is going to do what? How will we reach agreement?"
Type
Web Page
Title
PBCore: Public Broadcasting Metadata Dictionary Project
CA "PBCore is designed to provide -- for television, radio and Web activities -- a standard way of describing and using media (video, audio, text, images, rich interactive learning objects). It allows content to be more easily retrieved and shared among colleagues, software systems, institutions, community and production partners, private citizens, and educators. It can also be used as a guide for the onset of an archival or asset management process at an individual station or institution. ... The Public Broadcasting Metadata Dictionary (PBCore) is: a core set of terms and descriptors (elements) used to create information (metadata) that categorizes or describes media items (sometimes called assets or resources)."
Conclusions
RQ The PBCore Metadata Elements are currently in their first published edition, Version 1.0. Over two years of research and lively discussions have generated this version. ... As various users and communities begin to implement the PBCore, updates and refinements to the PBCore are likely to occur. Any changes will be clearly identified, ramifications outlined, and published to our constituents.
SOW
DC "Initial development funding for PBCore was provided by the Corporation for Public Broadcasting. The PBCore is built on the foundation of the Dublin Core (ISO 15836) ... and has been reviewed by the Dublin Core Metadata Initiative Usage Board. ... PBCore was successfully deployed in a number of test implementations in May 2004 in coordination with WGBH, Minnesota Public Radio, PBS, National Public Radio, Kentucky Educational Television, and recognized metadata expert Grace Agnew. As of July 2004 in response to consistent feedback to make metadata standards easy to use, the number of metadata elements was reduced to 48 from the original set of 58 developed by the Metadata Dictionary Team. Also, efforts are ongoing to provide more focused metadata examples that are specific to TV and radio. ... Available free of charge to public broadcasting stations, distributors, vendors, and partners, version 1.0 of PBCore was launched in the first quarter of 2005. See our Licensing Agreement via the Creative Commons for further information. ... Plans are under way to designate an Authority/Maintenance Organization."
Type
Web Page
Title
Metadata for preservation : CEDARS project document AIW01
This report is a review of metadata formats and initiatives in the specific area of digital preservation. It supplements the DESIRE Review of metadata (Dempsey et al. 1997). It is based on a literature review and information picked up at a number of workshops and meetings, and is an attempt to briefly describe the state of the art in the area of metadata for digital preservation.
Critical Arguments
CA "The projects, initiatives and formats reviewed in this report show that much work remains to be done. . . . The adoption of persistent and unique identifiers is vital, both in the CEDARS project and outside. Many of these initiatives mention "wrappers", "containers" and "frameworks". Some thought should be given to how metadata should be integrated with data content in CEDARS. Authenticity (or intellectual preservation) is going to be important. It will be interesting to investigate whether some archivists' concerns with custody or "distributed custody" will have relevance to CEDARS."
Conclusions
RQ Which standards and initiatives described in this document have proved viable preservation metadata models?
SOW
DC OAIS emerged out of an initiative spearheaded by NASA's Consultative Committee for Space Data Systems. It has been shaped and promoted by the RLG and OCLC. Several international projects have played key roles in shaping the OAIS model and adapting it for use in libraries, archives and research repositories. OAIS-modeled repositories include the CEDARS Project, Harvard's Digital Repository, Koninklijke Bibliotheek (KB), the Library of Congress' Archival Information Package for audiovisual materials, MIT's D-Space, OCLC's Digital Archive and TERM: the Texas Email Repository Model.
Type
Web Page
Title
The Gateway to Educational Materials: An Evaluation Study, Year 4: A Technical Report submitted to the US Department of Education
CA The Gateway to Educational Materials (GEM) is a Web site created through the efforts of several groups, including the US Department of Education, The National Library of Education, and a team from Syracuse University. The goal of the project is to provide teachers with a broad range of educational materials on the World Wide Web. This study evaluates The Gateway as an online source of educational information. The purpose of this evaluation is to provide developers of The Gateway with information about aspects of the system that might need improvement, and to display lessons learned through this process to developers of similar systems. It is the fourth in a series of annual studies, and focuses on effectiveness of The Gateway from the perspectives of end users and collection holders.
Just like other memory institutions, libraries will have to play an important part in the Semantic Web. In that context, ontologies and conceptual models in the field of cultural heritage information are crucial, and the interoperability between these ontologies and models perhaps even more crucial. This document reviews four projects and models that the FRBR Review Group recommends for consideration as to interoperability with FRBR.
Publisher
International Federation of Library Associations and Institutions
Critical Arguments
CA "Just like other memory institutions, libraries will have to play an important part in the Semantic Web. In that context, ontologies and conceptual models in the field of cultural heritage information are crucial, and the interoperability between these ontologies and models perhaps even more crucial."
Conclusions
RQ 
SOW
DC "Some members of the CRM-SIG, including Martin Doerr himself, also are subscribers to the FRBR listserv, and Patrick Le Boeuf, chair of the FRBR Review Group, also is a member of the CRM-SIG and ISO TC46/SC4/WG9 (the ISO Group on CRM). A FRBR to CRM mapping is available from the CIDOC CRM-SIG listserv archive." ... This report was produced by the Cataloguing Section of IFLA, the International Federation of Library Associations and Institutions. 
Type
Web Page
Title
Creating and Documenting Text: A Guide to Good Practice
CA "The aim of this Guide is to take users through the basic steps involved in creating and documenting an electronic text or similar digital resource. ... This Guide assumes that the creators of electronic texts have a number of common concerns. For example, that they wish their efforts to remain viable and usable in the long-term, and not to be unduly constrained by the limitations of current hardware and software. Similarly, that they wish others to be able to reuse their work, for the purposes of secondary analysis, extension, or adaptation. They also want the tools, techniques, and standards that they adopt to enable them to capture those aspects of any non-electronic sources which they consider to be significant -- whilst at the same time being practical and cost-effective to implement."
Conclusions
RQ "While a single metadata scheme, adopted and implemented wholescale would be the ideal, it is probable that a proliferation of metadata schemes will emerge and be used by different communities. This makes the current work centred on integrated services and interoperability all the more important. ... The Warwick Framework (http://www.ukoln.ac.uk/metadata/resources/wf.html) for example suggests the concept of a container architecture, which can support the coexistence of several independently developed and maintained metadata packages which may serve other functions (rights management, administrative metadata, etc.). Rather than attempt to provide a metadata scheme for all web resources, the Warwick Framework uses the Dublin Core as a starting point, but allows individual communities to extend this to fit their own subject-specific requirements. This movement towards a more decentralised, modular and community-based solution, where the 'communities of expertise' themselves create the metadata they need has much to offer. In the UK, various funded organisations such as the AHDS (http://ahds.ac.uk/), and projects like ROADS (http://www.ilrt.bris.ac.uk/roads/) and DESIRE (http://www.desire.org/) are all involved in assisting the development of subject-based information gateways that provide metadata-based services tailored to the needs of particular user communities."
Type
Web Page
Title
Descriptive Metadata Guidelines for RLG Cultural Materials
To ensure that the digital collections submitted to RLG Cultural Materials can be discovered and understood, RLG has compiled these Descriptive Metadata Guidelines for contributors. While these guidelines reflect the needs of one particular service, they also represent a case study in information sharing across community and national boundaries. RLG Cultural Materials engages a wide range of contributors with different local practices and institutional priorities. Since it is impossible to find -- and impractical to impose -- one universally applicable standard as a submission format, RLG encourages contributors to follow the suite of standards applicable to their particular community (p.1).
Critical Arguments
CA "These guidelines . . . do not set a new standard for metadata submission, but rather support a baseline that can be met by any number of strategies, enabling participating institutions to leverage their local descriptions. These guidelines also highlight the types of metadata that enhance functionality for RLG Cultural Materials. After a contributor submits a collection, RLG maps that description into the RLG Cultural Materials database using the RLG Cultural Materials data model. This ensures that metadata from the various participant communities is integrated for efficient searching and retrieval" (p.1).
Conclusions
RQ Not applicable.
SOW
DC RLG comprises more than 150 research and cultural memory institutions, and RLG Cultural Materials elicits contributions from countless museums, archives, and libraries from around the world that, although they might retain local descriptive standards and metadata schemas, must conform to the baseline standards prescribed in this document in order to integrate into RLG Cultural Materials. Appendix A represents and evaluates the most common metadata standards with which RLG Cultural Materials is able to work.